Introduction to Open Data Science - Course Project

About the project

Write a short description about the course and add a link to your GitHub repository here. This is an R Markdown (.Rmd) file so you should use R Markdown syntax.

This is the course diary of the course Introduction to Open Data Science. My repository can be found here: https://github.com/anterogradinen/IODS-project

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Mon Dec 11 21:37:34 2023"

Chapter 1

Assignment 1: Tasks and Instructions copied from Moodle.

  1. DONE. Check that you have everything installed and created according to the instructions. You should have a GitHub repository, a course diary web page (also on GitHub, at a different address) and the IODS-project started in RStudio using the course templates. (3 p)

  2. DONE. Open the file chapter1.Rmd located in your IODS-project folder with RStudio. Just write some of your thoughts about this course freely in the file, e.g., How are you feeling right now? What do you expect to learn? Where did you hear about the course?

    1. Feelings. I am excited about this course! I have already taken one R course, but this course also considers the open data aspect, which is really interesting and an important future skill.

    2. Expectations. I really hope that this course and the other R course I have taken will support my learning process towards becoming a data analyst. The open data aspect and its possibilities also really inspire me.

    3. Where did I hear about this course? I have come across this course a couple of times before, but now I really have the possibility to participate in it.

  3. Also reflect on your learning experiences with the R for Health Data Science book and the Exercise Set 1:

    1. How did it work as a “crash course” on modern R tools and using RStudio?

      1. I found the book very useful, because it introduced R and RStudio a bit differently than the book “R for Data Science (2e)” by Wickham and Çetinkaya-Rundel (https://r4ds.hadley.nz/), which I read earlier this autumn. The two books support each other.

      2. Although I felt that the first five chapters in one week was a bit too much. (I am currently writing this after chapter 3.5.)

      3. At first I wasn’t sure whether the Exercise Set 1 material was needed, because the book provided code that can be copy-pasted, but the exercise material saved a lot of time since I did not have to download all the example data and copy-paste every example script.

    2. Which were your favorite topics?

      1. I have found chapters 3.3, 3.4 and 3.5 the most useful so far. (I am currently writing this after chapter 3.5.)
    3. Which topics were most difficult?

      1. I think that internalizing how to use group_by and summarise may take some time. Ungrouping is also something I need to return to later.
    4. Some other comments on the book and our new approach of getting started with R Markdown etc.? (All this is just “warmup” to get well started and learn also the technical steps needed each week in Moodle, that is, submit and review.

    5. We will start more serious work next week! You can already look at the next topic in Moodle and begin working with the Exercise Set 2...)

  4. DONE Also add in the file a link to your GitHub repository (that you created earlier): https://github.com/anterogradinen/IODS-project

  5. You can immediately start to learn the basics of the R Markdown syntax that we will use for writing the exercise reports: Try, for example, highlighting parts of your text, adding some headers, lists, links etc. Hint: Use the R Markdown Reference Guide or cheatsheet (both found from the RStudio Help). This is an excellent quick (1 min) tour of R Markdown, please watch: https://rmarkdown.rstudio.com/lesson-1.html (A short syntax sketch is included after this list.)

  6. DONE. Remember to save your chapter1.Rmd file. (5 p)

  7. DONE Open the index.Rmd file with RStudio. At the beginning of the file, in the YAML options below the ‘title’ option, add the following option: author: “Your Name”. Save the file and “knit” the document (there’s a button for that) as an HTML page. This will also update the index.html file. (2 p)

  8. DONE. (This point added in 2022 - let’s hope it works similarly in 2023!)
    To make the connection between RStudio and GitHub as smooth as possible, you should create a Personal Access Token (PAT).

    The shortest way to proceed is to follow the steps below. (Source: https://happygitwithr.com/https-pat.html)

    Execute these R commands in the RStudio Console (below the Editor):

    install.packages("usethis")
    usethis::create_github_token()

    The GitHub website will open in your browser. Log in with your GitHub credentials.

    • Write a Note in the box, for example “IODS Project”.

    • Select an Expiration time for your PAT, e.g., 50 days.

    • The pre-selected scopes “repo”, “workflow”, “gist”, and “user” are OK.

    • Press “Generate token” and copy the generated PAT to your clipboard.


    Return to RStudio and continue in the Console:

    gitcreds::gitcreds_set()
    • WAIT until a prompt “Enter password or token:” appears.

    • Paste your PAT to the prompt and press Enter.

    Now you should be able to work with GitHub, i.e., push and pull from RStudio. Congrats!! (5 p)

  9. Upload the changes to GitHub (the version control platform) from RStudio.

    There are a few phases (don’t worry: all this will become an easy routine for you very soon!):

  10. DEMO

    • First, select the “Git” tab in the upper right corner of RStudio. You will see a list of modified files.

    • Select “Commit”. It will open a new “Review Changes” window showing more detailed information of the changes you have made in each file since the previous version.

    • Tick the box in front of each file (be patient, it takes some time for the check to appear).

    • Write a small commit message (there’s a box for that) that describes your changes briefly. After this task is completed (not yet), both the changes and the message will be seen on GitHub. (Note: It is useful to make commits often and even on small changes. Commits are at the heart of the version control system, as a single commit represents a single version of the file.)

    • Press “Commit”. (RStudio uses Git to implement the changes included in the commit.)

    • Press “Push”. (RStudio uses Git to upload the changes to your GitHub repository.)

    • Now you can close the “Review Changes” window of RStudio. Good job!! (5 p)


  11. After a few moments, go to your GitHub repository at
    https://github.com/anterogradinen/IODS-project
    to see what has changed (please be patient and refresh the page).

    Also visit your course diary that has been automatically updated at
    https://anterogradinen.github.io/IODS-project/ and make sure you see the changes there as well.
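
A tiny sketch of the R Markdown syntax mentioned in point 5 above (based on the R Markdown cheatsheet):

# A header
## A smaller header
Some **bold** text, some *italic* text and some `inline code`.
- a bullet list item
- [a link](https://rmarkdown.rstudio.com/)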

After completing the tasks above you are ready to submit your Assignment for the review (using the Moodle Workshop below). Have the two links (your GitHub repository and your course diary) ready! Remember to get back there when the Review phase begins (see course schedule).

Have fun and don’t be afraid to ask for help using the Moodle discussion forum.



Chapter 2: Assignment 2: Analysis (max 15 points)

This week’s assignment was a tough one! I also had last-minute tech issues with knitting. Apologies if my code and text are difficult to read. I tried to use code chunks (like in the “date()” part) in parts 1-5, but I could not knit the whole thing, so the code is shown as regular text.

First I read the material, then started doing the exercises, and only then started Assignment 2 itself. I feel that this was not the most time-efficient way to learn. But this week’s learning curve was quite steep! I think I have learned quite a lot about linear models this week. I also found the data and the data wrangling exercise really useful for my own research topic.

Nevertheless, I think I need to read the material again, because I am not 100% confident when looking at model summaries and tables. I also fear that if I do not internalise these topics well enough, the rest of the course will be a torment, or I may drop out.

date()
## [1] "Mon Dec 11 21:37:34 2023"

1. Read the data & describe the dataset briefly

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(finalfit)
students2014_data <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/learning2014.txt", sep = ",", header = TRUE)

View(students2014_data)

# describe the dataset briefly

glimpse(students2014_data)
## Rows: 166
## Columns: 7
## $ gender   <chr> "F", "M", "F", "M", "M", "F", "M", "F", "M", "F", "M", "F", "…
## $ age      <int> 53, 55, 49, 53, 49, 38, 50, 37, 37, 42, 37, 34, 34, 34, 35, 3…
## $ attitude <dbl> 3.7, 3.1, 2.5, 3.5, 3.7, 3.8, 3.5, 2.9, 3.8, 2.1, 3.9, 3.8, 2…
## $ deep     <dbl> 3.583333, 2.916667, 3.500000, 3.500000, 3.666667, 4.750000, 3…
## $ stra     <dbl> 3.375, 2.750, 3.625, 3.125, 3.625, 3.625, 2.250, 4.000, 4.250…
## $ surf     <dbl> 2.583333, 3.166667, 2.250000, 2.250000, 2.833333, 2.416667, 1…
## $ points   <int> 25, 12, 24, 10, 22, 21, 21, 31, 24, 26, 31, 31, 23, 25, 21, 3…
# 166 respondents: 166 rows and 7 columns

ff_glimpse(students2014_data)
## $Continuous
##             label var_type   n missing_n missing_percent mean  sd  min
## age           age    <int> 166         0             0.0 25.5 7.8 17.0
## attitude attitude    <dbl> 166         0             0.0  3.1 0.7  1.4
## deep         deep    <dbl> 166         0             0.0  3.7 0.6  1.6
## stra         stra    <dbl> 166         0             0.0  3.1 0.8  1.2
## surf         surf    <dbl> 166         0             0.0  2.8 0.5  1.6
## points     points    <int> 166         0             0.0 22.7 5.9  7.0
##          quartile_25 median quartile_75  max
## age             21.0   22.0        27.0 55.0
## attitude         2.6    3.2         3.7  5.0
## deep             3.3    3.7         4.1  4.9
## stra             2.6    3.2         3.6  5.0
## surf             2.4    2.8         3.2  4.3
## points          19.0   23.0        27.8 33.0
## 
## $Categorical
##         label var_type   n missing_n missing_percent levels_n levels
## gender gender    <chr> 166         0             0.0        2      -
##        levels_count levels_percent
## gender            -              -

Comments

Other variables:

students2014_data |> count(gender)

2. Graphical overview

Show a graphical overview of the data and show summaries of the variables in the data. Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them.

library(GGally) 
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)

# "create a plot matrix with ggpairs()"

p <- ggpairs(students2014_data, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

# "draw the plot"

p


Summaries of the variables were shown above in part 1.

Comments based on the graphical overview.

3. Regression model

Create a regression model with multiple explanatory variables:

my_model3 <- lm(points ~ attitude + stra + gender, data = students2014_data)

# "print out a summary of the model"

summary(my_model3)

Explain and interpret the statistical test related to the model parameters.

my_model4 <- lm(points ~ attitude, data = students2014_data)
summary(my_model4)

Comments about my_model4:

4. Explaining the relationship between the explanatory variables and the target variable

“Using a summary of your fitted model, explain the relationship between the chosen explanatory variables and the target variable (interpret the model parameters).”

Explain and interpret the multiple R-squared of the model. (0-3 points)

qplot(attitude, points, data = students2014_data) + geom_smooth(method = "lm")

summary(my_model3)


5. Diagnostic plots

Produce the following diagnostic plots: Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage.

Explain the assumptions of the model and interpret the validity of those assumptions based on the diagnostic plots. (0-3 points)

my_model3 <- lm(points ~ attitude + gender, data = students2014_data)
plot(my_model3, which = c(1,2,5))

Explanation and interpretation: The observations are normally distributed around the fitted line, because the normal Q-Q plot shows the residuals following the straight line. This is a good sign, meaning that the residuals are equally distributed around the linear model line, which can also be seen visually in the qplot above.


Assignment 3

Analysis

date() #testing does the code chunk work (had issues with assignment 2)
## [1] "Mon Dec 11 21:37:51 2023"

As mentioned above, I had some tech issues with the last knitting and code chunk. As with last week’s assignment, I also had a difficult time with this week’s material and assignment. Unfortunately I did not manage to do every task required. I am a bit worried about falling behind in this course. Nevertheless, I found the exercise material really interesting.

2

Read the joined student alcohol consumption data into R either from your local folder (if you completed the Data wrangling part) or from this url (in case you got stuck with the Data wrangling part):

https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/alc.csv

(In the above linked file, the column separator is a comma and the first row includes the column names). Print out the names of the variables in the data and describe the data set briefly, assuming the reader has no previous knowledge of it. There is information related to the data here. (0-1 point)

library(dplyr)
library(tidyr)
library(finalfit)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ forcats   1.0.0     ✔ readr     2.1.4
## ✔ lubridate 1.9.2     ✔ stringr   1.5.0
## ✔ purrr     1.0.2     ✔ tibble    3.2.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
alc3 <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/alc.csv", sep = ",", header = TRUE)

glimpse(alc3)
## Rows: 370
## Columns: 35
## $ school     <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP",…
## $ sex        <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "F",…
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, 15,…
## $ address    <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U",…
## $ famsize    <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", "LE…
## $ Pstatus    <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "T",…
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3, 4,…
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2, 3,…
## $ Mjob       <chr> "at_home", "at_home", "at_home", "health", "other", "servic…
## $ Fjob       <chr> "teacher", "other", "other", "services", "other", "other", …
## $ reason     <chr> "course", "course", "other", "home", "home", "reputation", …
## $ guardian   <chr> "mother", "father", "mother", "mother", "father", "mother",…
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1, 1,…
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1, 1,…
## $ schoolsup  <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no", "n…
## $ famsup     <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "yes",…
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", "ye…
## $ nursery    <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "yes…
## $ higher     <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", "ye…
## $ internet   <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "yes",…
## $ romantic   <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "no"…
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5, 3,…
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5, 1,…
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5, 3,…
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1,…
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4, 3,…
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5, 5,…
## $ failures   <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0,…
## $ paid       <chr> "no", "no", "yes", "yes", "yes", "yes", "no", "no", "yes", …
## $ absences   <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9, 5,…
## $ G1         <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, 16,…
## $ G2         <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15, 16…
## $ G3         <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16, 1…
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1.0,…
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALS…

Description:

3

The purpose of your analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data. To do this, choose 4 interesting variables in the data and for each of them, present your personal hypothesis about their relationships with alcohol consumption. (0-1 point)

For this exercise I study the relationship between high/low alcohol consumption and gender, the education level of the parents, and the motivation to pursue higher education.

I am interested in whether the student’s motivation to pursue higher education and the parents’ education level are related to alcohol consumption.

The variables are the following (described at http://www.archive.ics.uci.edu/dataset/320/student+performance):

7 Medu - mother’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)

8 Fedu - father’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)

17 famsup - family educational support (binary: yes or no)

21 higher - wants to take higher education (binary: yes or no)

My personal hypothesis is that these variables would have a low (if any) negative correlation: the higher the family’s education level and the student’s “education motivation”, the lower the alcohol consumption.

I also computed a mean education level based on both parents’ education levels.

4

Numerically and graphically explore the distributions of your chosen variables and their relationships with alcohol consumption (use for example cross-tabulations, bar plots and box plots).

Comment on your findings and compare the results of your exploration to your previously stated hypotheses. (0-5 points)

First we look at how much alcohol is used across genders. We see that the data have 70 male students and 41 female students with high alcohol use.

Looking at the boxplots of parents’ education and high alcohol consumption, I do not see much of anything noteworthy. The median of the father’s education level is a bit lower for male students with high alcohol consumption. But the median of the father’s education level was also lower for female students with low alcohol consumption. I think a boxplot is not the best graphic for this data.

These observations do not really support my hypothesis (but do not necessarily refute it). I also note that I should analyze both parents’ education levels and alcohol consumption more thoroughly than with a mean value. Unfortunately I ran out of time with this assignment. :(

### bar plots ###

# A plot of alcohol use with gender
alc3 |> 
  group_by(sex) |> 
  count(alc3$high_use)
## # A tibble: 4 × 3
## # Groups:   sex [2]
##   sex   `alc3$high_use`     n
##   <chr> <lgl>           <int>
## 1 F     FALSE             154
## 2 F     TRUE               41
## 3 M     FALSE             105
## 4 M     TRUE               70
g1 <- ggplot(data = alc3, aes(x = high_use))
g1 + geom_bar() + facet_wrap("sex")

### box plots ###

# a plot of high_use and mother's education
g1 <- ggplot(alc3, aes(x = high_use, y = Medu, col = sex))
g1 + geom_boxplot() + ylab("Mother's education")

# a plot of high_use and father's education
g2 <- ggplot(alc3, aes(x = high_use, y = Fedu, col = sex))
g2 + geom_boxplot() + ylab("Father's education")

# a plot of high_use and parents' education (Pedu, mean of Medu and Fedu)
alc3 <- mutate(alc3, Pedu = ((Medu + Fedu) / 2))
g3 <- ggplot(alc3, aes(x = high_use, y = Pedu, col = sex))
g3 + geom_boxplot() + ylab("Parents' education")

5

Use logistic regression to statistically explore the relationship between your chosen variables and the binary high/low alcohol consumption variable as the target variable. Present and interpret a summary of the fitted model. Present and interpret the coefficients of the model as odds ratios and provide confidence intervals for them. Interpret the results and compare them to your previously stated hypothesis.

Hint: If your model includes factor variables, see for example the RHDS book or the first answer of this stack exchange thread on how R treats and how you should interpret these variables in the model output (or use some other resource to study this). (0-5 points)
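
I did not get this far in my own analysis, but a minimal sketch of fitting the model could look like the following (assuming the variables chosen above: sex, Pedu and higher):

# fit a logistic regression with high_use as the target variable (a sketch)
m <- glm(high_use ~ sex + Pedu + higher, data = alc3, family = "binomial")
summary(m)

# present the coefficients as odds ratios with 95% confidence intervals
OR <- coef(m) |> exp()
CI <- confint(m) |> exp()
cbind(OR, CI)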

6

Using the variables which, according to your logistic regression model, had a statistical relationship with high/low alcohol consumption, explore the predictive power of your model. Provide a 2x2 cross tabulation of predictions versus the actual values and optionally display a graphic visualizing both the actual values and the predictions. Compute the total proportion of inaccurately classified individuals (= the training error) and comment on all the results. Compare the performance of the model with the performance achieved by some simple guessing strategy. (0-3 points)
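
A sketch of exploring the predictive power, with a loss function like the one in the Exercise Set (the 0.5 cutoff is the usual default, my own assumption here):

# predicted probabilities and classes from the fitted model (a sketch)
probabilities <- predict(m, type = "response")
alc3 <- mutate(alc3, probability = probabilities, prediction = probability > 0.5)

# 2x2 cross tabulation of predictions versus the actual values
table(high_use = alc3$high_use, prediction = alc3$prediction)

# training error: the total proportion of inaccurately classified individuals
loss_func <- function(class, prob) {
  mean(abs(class - prob) > 0.5)
}
loss_func(class = alc3$high_use, prob = alc3$probability)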

Bonus: Perform 10-fold cross-validation on your model. Does your model have better test set performance (smaller prediction error using 10-fold cross-validation) compared to the model introduced in the Exercise Set (which had about 0.26 error)? Could you find such a model? (0-2 points to compensate any loss of points from the above exercises)
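
A sketch of the 10-fold cross-validation (assuming the boot package and the loss function above):

# 10-fold cross-validation of the model
library(boot)
cv <- cv.glm(data = alc3, cost = loss_func, glmfit = m, K = 10)

# average prediction error in the cross-validation
cv$delta[1]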

Super-Bonus: Perform cross-validation to compare the performance of different logistic regression models (= different sets of predictors). Start with a very high number of predictors and explore the changes in the training and testing errors as you move to models with fewer predictors. Draw a graph displaying the trends of both training and testing errors by the number of predictors in the model. (0-4 points to compensate any loss of points from the above exercises)
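
I did not attempt the super-bonus either, but a rough sketch of the idea could look like this (the predictor sets below are arbitrary examples, not an actual analysis):

# compare training and testing errors of models with fewer and fewer predictors (a sketch)
predictor_sets <- list(
  c("sex", "Pedu", "higher", "famsup", "goout", "absences"),
  c("sex", "Pedu", "higher"),
  c("sex", "higher")
)

errors <- sapply(predictor_sets, function(vars) {
  f <- reformulate(vars, response = "high_use")
  fit <- glm(f, data = alc3, family = "binomial")
  train_err <- loss_func(alc3$high_use, predict(fit, type = "response"))
  test_err <- cv.glm(data = alc3, cost = loss_func, glmfit = fit, K = 10)$delta[1]
  c(predictors = length(vars), training = train_err, testing = test_err)
})

# the error trends by the number of predictors could then be plotted from t(errors)
t(errors)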

After completing all the phases above you are ready to submit your Assignment for the review (using the Moodle Workshop below). Have the two links (your GitHub repository and your course diary) ready!


Assignment 4. Analysis exercises (Max 15 points)

1.The data

Explore the structure and the dimensions of the Boston data and describe the dataset briefly, assuming the reader has no previous knowledge of it. Details about the Boston dataset can be seen for example here. (0-1 points)

The Housing Values in Suburbs of Boston.

  • The dataset contains 14 columns, including the crime rate (mean 3.6, median 0.3), the pupil-teacher ratio (mean 18.46, median 19.05) and the proportion of non-retail business acres (mean 11.1, median 9.7) per town. Other interesting columns are, for example, the distance to employment centres (mean 3.795, median 3.207), the property tax rate (mean 408.2, median 330) and the percentage of lower-status population (mean 12.65, median 11.36). There is no missingness in the data, and there are 506 rows (towns?) in the data. All variables are numerical; chas is binary.

  • The full list of columns is the following (from here).

    • crim per capita crime rate by town.

    • zn proportion of residential land zoned for lots over 25,000 sq.ft.

    • indus proportion of non-retail business acres per town.

    • chas Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).

    • nox nitrogen oxides concentration (parts per 10 million).

    • rm average number of rooms per dwelling.

    • age proportion of owner-occupied units built prior to 1940.

    • dis weighted mean of distances to five Boston employment centres.

    • rad index of accessibility to radial highways.

    • tax full-value property-tax rate per $10,000.

    • ptratio pupil-teacher ratio by town.

    • black 1000(Bk − 0.63)^2 where Bk is the proportion of blacks by town.

    • lstat lower status of the population (percent).

    • medv median value of owner-occupied homes in $1000s.

#install.packages("MASS")
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
library(finalfit)
library(dplyr)
library(corrplot)
## corrplot 0.92 loaded
data("Boston")
glimpse(Boston)
## Rows: 506
## Columns: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829,…
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5, 1…
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87, 7.…
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.524,…
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.631,…
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9, 9…
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.9505…
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4,…
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311, 31…
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2, 15…
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396.90…
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17.10…
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9, 15…
ff_glimpse(Boston)
## $Continuous
##           label var_type   n missing_n missing_percent  mean    sd   min
## crim       crim    <dbl> 506         0             0.0   3.6   8.6   0.0
## zn           zn    <dbl> 506         0             0.0  11.4  23.3   0.0
## indus     indus    <dbl> 506         0             0.0  11.1   6.9   0.5
## chas       chas    <int> 506         0             0.0   0.1   0.3   0.0
## nox         nox    <dbl> 506         0             0.0   0.6   0.1   0.4
## rm           rm    <dbl> 506         0             0.0   6.3   0.7   3.6
## age         age    <dbl> 506         0             0.0  68.6  28.1   2.9
## dis         dis    <dbl> 506         0             0.0   3.8   2.1   1.1
## rad         rad    <int> 506         0             0.0   9.5   8.7   1.0
## tax         tax    <dbl> 506         0             0.0 408.2 168.5 187.0
## ptratio ptratio    <dbl> 506         0             0.0  18.5   2.2  12.6
## black     black    <dbl> 506         0             0.0 356.7  91.3   0.3
## lstat     lstat    <dbl> 506         0             0.0  12.7   7.1   1.7
## medv       medv    <dbl> 506         0             0.0  22.5   9.2   5.0
##         quartile_25 median quartile_75   max
## crim            0.1    0.3         3.7  89.0
## zn              0.0    0.0        12.5 100.0
## indus           5.2    9.7        18.1  27.7
## chas            0.0    0.0         0.0   1.0
## nox             0.4    0.5         0.6   0.9
## rm              5.9    6.2         6.6   8.8
## age            45.0   77.5        94.1 100.0
## dis             2.1    3.2         5.2  12.1
## rad             4.0    5.0        24.0  24.0
## tax           279.0  330.0       666.0 711.0
## ptratio        17.4   19.1        20.2  22.0
## black         375.4  391.4       396.2 396.9
## lstat           6.9   11.4        17.0  38.0
## medv           17.0   21.2        25.0  50.0
## 
## $Categorical
## data frame with 0 columns and 506 rows
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

2. Graphical overview

Show a graphical overview of the data and show summaries of the variables in the data. Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them. (0-2 points)

Ggpairs. First I tried the ggpairs plot matrix from previous weeks. The image was so cluttered that I had a hard time figuring out what is going on there. I added “proportions = auto”, which helped a little. Ggpairs shows interesting correlations: for example, there is a positive correlation between the crime rate and the lower-status population ratio, and a negative correlation between the crime rate and the proportion of the black population.

Pairs. This plot gives the same information as the previous one. Ggpairs is a bit more helpful because it includes the correlation coefficients.

Corrplot. This gives the same information as ggpairs. In the future I will probably use corrplot rather than ggpairs, because it is tidier and it is easier to see which variables have higher correlations. For example, dis (distance to employment centres) has negative correlations with indus (proportion of non-retail business acres), nox (nitrogen oxides concentration) and age (proportion of owner-occupied units built prior to 1940). The crime rate has positive correlations with rad (index of accessibility to radial highways) and tax (full-value property-tax rate per $10,000).

library(GGally) 
library(ggplot2)

# a plot matrix with ggpairs()

p2 <- ggpairs(Boston, mapping = aes(alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)), proportions = "auto")
p2

pairs(Boston)

# calculate the correlation matrix
cor_matrix <- cor(Boston) 

# print the correlation matrix
cor_matrix
##                crim          zn       indus         chas         nox
## crim     1.00000000 -0.20046922  0.40658341 -0.055891582  0.42097171
## zn      -0.20046922  1.00000000 -0.53382819 -0.042696719 -0.51660371
## indus    0.40658341 -0.53382819  1.00000000  0.062938027  0.76365145
## chas    -0.05589158 -0.04269672  0.06293803  1.000000000  0.09120281
## nox      0.42097171 -0.51660371  0.76365145  0.091202807  1.00000000
## rm      -0.21924670  0.31199059 -0.39167585  0.091251225 -0.30218819
## age      0.35273425 -0.56953734  0.64477851  0.086517774  0.73147010
## dis     -0.37967009  0.66440822 -0.70802699 -0.099175780 -0.76923011
## rad      0.62550515 -0.31194783  0.59512927 -0.007368241  0.61144056
## tax      0.58276431 -0.31456332  0.72076018 -0.035586518  0.66802320
## ptratio  0.28994558 -0.39167855  0.38324756 -0.121515174  0.18893268
## black   -0.38506394  0.17552032 -0.35697654  0.048788485 -0.38005064
## lstat    0.45562148 -0.41299457  0.60379972 -0.053929298  0.59087892
## medv    -0.38830461  0.36044534 -0.48372516  0.175260177 -0.42732077
##                  rm         age         dis          rad         tax    ptratio
## crim    -0.21924670  0.35273425 -0.37967009  0.625505145  0.58276431  0.2899456
## zn       0.31199059 -0.56953734  0.66440822 -0.311947826 -0.31456332 -0.3916785
## indus   -0.39167585  0.64477851 -0.70802699  0.595129275  0.72076018  0.3832476
## chas     0.09125123  0.08651777 -0.09917578 -0.007368241 -0.03558652 -0.1215152
## nox     -0.30218819  0.73147010 -0.76923011  0.611440563  0.66802320  0.1889327
## rm       1.00000000 -0.24026493  0.20524621 -0.209846668 -0.29204783 -0.3555015
## age     -0.24026493  1.00000000 -0.74788054  0.456022452  0.50645559  0.2615150
## dis      0.20524621 -0.74788054  1.00000000 -0.494587930 -0.53443158 -0.2324705
## rad     -0.20984667  0.45602245 -0.49458793  1.000000000  0.91022819  0.4647412
## tax     -0.29204783  0.50645559 -0.53443158  0.910228189  1.00000000  0.4608530
## ptratio -0.35550149  0.26151501 -0.23247054  0.464741179  0.46085304  1.0000000
## black    0.12806864 -0.27353398  0.29151167 -0.444412816 -0.44180801 -0.1773833
## lstat   -0.61380827  0.60233853 -0.49699583  0.488676335  0.54399341  0.3740443
## medv     0.69535995 -0.37695457  0.24992873 -0.381626231 -0.46853593 -0.5077867
##               black      lstat       medv
## crim    -0.38506394  0.4556215 -0.3883046
## zn       0.17552032 -0.4129946  0.3604453
## indus   -0.35697654  0.6037997 -0.4837252
## chas     0.04878848 -0.0539293  0.1752602
## nox     -0.38005064  0.5908789 -0.4273208
## rm       0.12806864 -0.6138083  0.6953599
## age     -0.27353398  0.6023385 -0.3769546
## dis      0.29151167 -0.4969958  0.2499287
## rad     -0.44441282  0.4886763 -0.3816262
## tax     -0.44180801  0.5439934 -0.4685359
## ptratio -0.17738330  0.3740443 -0.5077867
## black    1.00000000 -0.3660869  0.3334608
## lstat   -0.36608690  1.0000000 -0.7376627
## medv     0.33346082 -0.7376627  1.0000000
# visualize the correlation matrix
library(corrplot)
corrplot(cor_matrix, method="circle")

3. Standardize the dataset

Standardize the dataset and print out summaries of the scaled data. How did the variables change? Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate). Use the quantiles as the break points in the categorical variable. Drop the old crime rate variable from the dataset. Divide the dataset to train and test sets, so that 80% of the data belongs to the train set. (0-2 points)

Standardize & scale the dataset. How did the variables change? The scale() function subtracts the column mean from each value and divides by the column standard deviation, so every variable now has mean 0 and standard deviation 1. First I notice that the crime rate dropped from mean 3.61 and median 0.25651 to mean 0 and median -0.390280. The maximum values of every variable dropped significantly.

I tried corrplot out of curiosity and saw no changes (no ****, Sherlock: scaling does not affect correlations).

# center and standardize variables
boston_scaled <- as.data.frame(scale(Boston))
boston_scaled$crim <- as.numeric(boston_scaled$crim)
  
# summaries of the original and the scaled variables
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
glimpse(boston_scaled)
## Rows: 506
## Columns: 14
## $ crim    <dbl> -0.4193669, -0.4169267, -0.4169290, -0.4163384, -0.4120741, -0…
## $ zn      <dbl> 0.28454827, -0.48724019, -0.48724019, -0.48724019, -0.48724019…
## $ indus   <dbl> -1.2866362, -0.5927944, -0.5927944, -1.3055857, -1.3055857, -1…
## $ chas    <dbl> -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0…
## $ nox     <dbl> -0.1440749, -0.7395304, -0.7395304, -0.8344581, -0.8344581, -0…
## $ rm      <dbl> 0.4132629, 0.1940824, 1.2814456, 1.0152978, 1.2273620, 0.20689…
## $ age     <dbl> -0.11989477, 0.36680343, -0.26554897, -0.80908783, -0.51067434…
## $ dis     <dbl> 0.1400749840, 0.5566090496, 0.5566090496, 1.0766711351, 1.0766…
## $ rad     <dbl> -0.9818712, -0.8670245, -0.8670245, -0.7521778, -0.7521778, -0…
## $ tax     <dbl> -0.6659492, -0.9863534, -0.9863534, -1.1050216, -1.1050216, -1…
## $ ptratio <dbl> -1.4575580, -0.3027945, -0.3027945, 0.1129203, 0.1129203, 0.11…
## $ black   <dbl> 0.4406159, 0.4406159, 0.3960351, 0.4157514, 0.4406159, 0.41016…
## $ lstat   <dbl> -1.07449897, -0.49195252, -1.20753241, -1.36017078, -1.0254866…
## $ medv    <dbl> 0.15952779, -0.10142392, 1.32293748, 1.18158864, 1.48603229, 0…
#cor_matrix2 <- cor(boston_scaled) 
#cor_matrix2
#corrplot(cor_matrix2, method="circle")

Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate). Use the quantiles as the break points in the categorical variable.

bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# created a categorical variable 'crime': low, med_low, med_high and high
crime <- cut(boston_scaled$crim, breaks = bins, labels = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)

# look at the new factor crime
crime
##   [1] low      low      low      low      low      low      med_low  med_low 
##   [9] med_low  med_low  med_low  med_low  med_low  med_high med_high med_high
##  [17] med_high med_high med_high med_high med_high med_high med_high med_high
##  [25] med_high med_high med_high med_high med_high med_high med_high med_high
##  [33] med_high med_high med_high low      med_low  low      med_low  low     
##  [41] low      med_low  med_low  med_low  med_low  med_low  med_low  med_low 
##  [49] med_low  med_low  med_low  low      low      low      low      low     
##  [57] low      low      med_low  med_low  med_low  med_low  med_low  med_low 
##  [65] low      low      low      low      med_low  med_low  med_low  med_low 
##  [73] med_low  med_low  low      med_low  med_low  med_low  low      med_low 
##  [81] low      low      low      low      low      low      low      low     
##  [89] low      low      low      low      low      low      low      med_low 
##  [97] med_low  med_low  low      low      med_low  med_low  med_low  med_low 
## [105] med_low  med_low  med_low  med_low  med_low  med_high med_low  med_low 
## [113] med_low  med_low  med_low  med_low  med_low  med_low  med_low  med_low 
## [121] low      low      med_low  med_low  med_low  med_low  med_high med_high
## [129] med_high med_high med_high med_high med_high med_high med_high med_high
## [137] med_high med_high med_low  med_high med_high med_high med_high high    
## [145] med_high med_high med_high med_high med_high med_high med_high med_high
## [153] med_high med_high med_high med_high med_high med_high med_high med_high
## [161] med_high med_high med_high med_high med_high med_high med_high med_high
## [169] med_high med_high med_high med_high med_low  med_low  med_low  low     
## [177] low      low      low      low      low      low      med_low  med_low 
## [185] med_low  low      low      low      med_low  med_low  med_low  low     
## [193] med_low  low      low      low      low      low      low      low     
## [201] low      low      low      low      low      med_low  med_low  med_low 
## [209] med_low  med_high med_low  med_high med_low  med_low  med_high med_low 
## [217] low      low      med_low  med_low  med_high med_high med_high med_high
## [225] med_high med_high med_high med_high med_high med_high med_high med_high
## [233] med_high med_high med_high med_high med_high med_high med_low  med_low 
## [241] med_low  med_low  med_low  med_low  med_low  med_low  med_high med_low 
## [249] med_low  med_low  med_low  med_low  med_low  med_high low      low     
## [257] low      med_high med_high med_high med_high med_high med_high med_high
## [265] med_high med_high med_high med_high med_high med_low  med_high med_low 
## [273] med_low  med_low  low      med_low  med_low  low      low      med_low 
## [281] low      low      low      low      low      low      low      low     
## [289] low      low      low      low      low      med_low  low      med_low 
## [297] low      med_low  low      low      low      low      med_low  med_low 
## [305] low      low      low      low      med_high med_high med_high med_high
## [313] med_high med_high med_high med_low  med_high med_low  med_high med_high
## [321] med_low  med_low  med_high med_high med_high med_low  med_high med_low 
## [329] low      low      low      low      low      low      low      low     
## [337] low      low      low      low      low      low      low      low     
## [345] low      low      low      low      low      low      low      low     
## [353] low      low      low      med_low  high     high     high     high    
## [361] high     high     high     high     med_high high     high     high    
## [369] high     high     high     high     high     high     high     high    
## [377] high     high     high     high     high     high     high     high    
## [385] high     high     high     high     high     high     high     high    
## [393] high     high     high     high     high     high     high     high    
## [401] high     high     high     high     high     high     high     high    
## [409] high     high     high     high     high     high     high     high    
## [417] high     high     high     high     high     high     high     high    
## [425] high     high     high     high     high     high     high     high    
## [433] high     high     high     high     high     high     high     high    
## [441] high     high     high     high     high     high     high     high    
## [449] high     high     high     high     high     high     high     high    
## [457] high     high     high     high     high     high     high     high    
## [465] high     med_high high     high     high     high     high     high    
## [473] med_high high     high     high     high     high     high     high    
## [481] high     high     high     med_high med_high med_high high     high    
## [489] med_low  med_low  med_low  med_low  med_low  med_low  med_high med_low 
## [497] med_high med_high med_low  med_low  med_low  low      low      low     
## [505] med_low  low     
## Levels: low med_low med_high high

Drop the old crime rate variable from the dataset

# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

summary(boston_scaled)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv              crime    
##  Min.   :-1.9063   low     :127  
##  1st Qu.:-0.5989   med_low :126  
##  Median :-0.1449   med_high:126  
##  Mean   : 0.0000   high    :127  
##  3rd Qu.: 0.2683                 
##  Max.   : 2.9865

Divide the dataset to train and test sets, so that 80% of the data belongs to the train set.

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

4. Linear discriminant analysis

Fit the linear discriminant analysis on the train set. Use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables. Draw the LDA (bi)plot. (0-3 points)

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2549505 0.2475248 0.2400990 0.2574257 
## 
## Group means:
##                  zn      indus         chas        nox          rm        age
## low       0.8900317 -0.8965236 -0.157656245 -0.8684159  0.37468317 -0.8705502
## med_low  -0.1142091 -0.2796035  0.003267949 -0.5502877 -0.14789617 -0.3562098
## med_high -0.3979497  0.1951486  0.255323541  0.4132138  0.09310499  0.4335692
## high     -0.4872402  1.0170690 -0.045188669  1.0337255 -0.45879259  0.8063621
##                 dis        rad        tax     ptratio       black       lstat
## low       0.8465332 -0.6875068 -0.7438324 -0.42163424  0.39133889 -0.75508275
## med_low   0.3856159 -0.5454537 -0.5072898 -0.07368943  0.32507599 -0.15159696
## med_high -0.3922782 -0.4029017 -0.3077424 -0.33898502  0.08873806  0.02570132
## high     -0.8659884  1.6386213  1.5144083  0.78135074 -0.90415075  0.92848917
##                  medv
## low       0.472627610
## med_low   0.003391685
## med_high  0.177014243
## high     -0.724216109
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.08487923  0.705863633 -1.00242575
## indus    0.02587752 -0.223874819  0.40542436
## chas    -0.07435976 -0.106912072  0.11363343
## nox      0.32447053 -0.714468192 -1.23014384
## rm      -0.10870281 -0.105941705 -0.17919263
## age      0.22250644 -0.380541808 -0.12481545
## dis     -0.10670735 -0.313670764  0.42215704
## rad      3.09560845  1.040935191 -0.02272784
## tax      0.10087910 -0.167669598  0.43408928
## ptratio  0.07701575  0.007071431 -0.21673883
## black   -0.14303632  0.024817206  0.10445412
## lstat    0.22572581 -0.200432956  0.38865822
## medv     0.18174986 -0.364030622 -0.10474777
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9506 0.0376 0.0119
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  graphics::arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results (select both lines and execute them at the same time!)
plot(lda.fit, dimen = 2)
lda.arrows(lda.fit, myscale = 1)

5. Predict & cross tabulate

Save the crime categories from the test set and then remove the categorical crime variable from the test dataset. Then predict the classes with the LDA model on the test data. Cross tabulate the results with the crime categories from the test set. Comment on the results. (0-3 points)

correct_classes <- test$crime
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17       6        1    0
##   med_low    6      13        7    0
##   med_high   0       9       19    1
##   high       0       0        0   23

6. Standardize the dataset

Reload the Boston dataset and standardize the dataset (we did not do this in the Exercise Set, but you should scale the variables to get comparable distances).

data("Boston")

# center and standardize variables
boston_scaled <- scale(Boston)
  
# summaries of the original and the scaled variables
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
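For reference, scale() subtracts each column's mean and divides by its standard deviation. A minimal sketch checking this for a single variable (my own addition, not part of the Exercise Set):

# scale() is equivalent to (x - mean(x)) / sd(x), column by column
all.equal(as.numeric(scale(Boston$crim)),
          (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))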

Calculate the distances between the observations.

# euclidean distance matrix (note: computed here on the unscaled Boston data)
dist_eu <- dist(Boston)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.119  85.624 170.539 226.315 371.950 626.047
# manhattan distance matrix (also on the unscaled data)
dist_man <- dist(Boston, method = "manhattan")

# look at the summary of the distances
summary(dist_man)
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
##    2.016  149.145  279.505  342.899  509.707 1198.265
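Note that both distance matrices above are computed on the unscaled Boston data, although the task asked for scaled variables so that the distances are comparable. A minimal sketch of the same computation on the scaled data (output not shown; the summary values would differ from the above):

# euclidean distances on the standardized data
dist_eu_scaled <- dist(boston_scaled)
summary(dist_eu_scaled)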

Run k-means algorithm on the dataset.

km <- kmeans(Boston, centers = 4)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

# k-means clustering
km <- kmeans(Boston, centers = 4)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

####

# k-means clustering
km <- kmeans(Boston, centers = 3)

# plot the Boston dataset with clusters
pairs(Boston[c("rm", "age", "dis", "crim")], col = km$cluster)

Investigate what is the optimal number of clusters and run the algorithm again.

set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
## Warning: `qplot()` was deprecated in ggplot2 3.4.0.
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was
## generated.
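Since qplot() is deprecated as of ggplot2 3.4.0, the same line plot can be drawn with ggplot() itself; an equivalent call would be, for example:

# the same TWCSS line plot without the deprecated qplot()
ggplot(data.frame(k = 1:k_max, twcss = twcss), aes(x = k, y = twcss)) +
  geom_line()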

# k-means clustering
km <- kmeans(Boston, centers = 10)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

The optimal number of clusters is where the total WCSS drops sharply. In this example the TWCSS drops when the number of clusters is two, so I run the algorithm again with two centers.

k_max <- 2

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})

Visualize the clusters (for example with the pairs() or ggpairs() functions, where the clusters are separated with colors) and interpret the results. (0-4 points)

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering
km <- kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
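As with the distances, the k-means runs above use the unscaled data. Because k-means is itself based on Euclidean distances, the standardized data would arguably be the safer choice; a minimal sketch (output not shown):

# k-means on the standardized data, two clusters as chosen above
set.seed(123)
km_scaled <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled[c("rm", "age", "dis", "crim")], col = km_scaled$cluster)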


Assignment 5: Data analysis

library(readr)
human <- read_csv("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/human2.csv")
## Rows: 155 Columns: 9
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (1): Country
## dbl (8): Edu2.FM, Labo.FM, Life.Exp, Edu.Exp, GNI, Mat.Mor, Ado.Birth, Parli.F
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

1. Data

Move the country names to rownames (see Exercise 5.5). Show a graphical overview of the data and show summaries of the variables in the data. Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them. (0-3 points)

library(GGally)

# Move the country names to rownames
library(tibble)
human_ <- column_to_rownames(human, "Country")
head(human_)
##               Edu2.FM   Labo.FM Life.Exp Edu.Exp   GNI Mat.Mor Ado.Birth
## Norway      1.0072389 0.8908297     81.6    17.5 64992       4       7.8
## Australia   0.9968288 0.8189415     82.4    20.2 42261       6      12.1
## Switzerland 0.9834369 0.8251001     83.0    15.8 56431       6       1.9
## Denmark     0.9886128 0.8840361     80.2    18.7 44025       5       5.1
## Netherlands 0.9690608 0.8286119     81.6    17.9 45435       6       6.2
## Germany     0.9927835 0.8072289     80.9    16.5 43919       7       3.8
##             Parli.F
## Norway         39.6
## Australia      30.5
## Switzerland    28.5
## Denmark        38.0
## Netherlands    36.9
## Germany        36.9
# graphical overview 
ggpairs(human_, progress = FALSE)

# show summaries of the variables in the data
summary(human_)
##     Edu2.FM          Labo.FM          Life.Exp        Edu.Exp     
##  Min.   :0.1717   Min.   :0.1857   Min.   :49.00   Min.   : 5.40  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:66.30   1st Qu.:11.25  
##  Median :0.9375   Median :0.7535   Median :74.20   Median :13.50  
##  Mean   :0.8529   Mean   :0.7074   Mean   :71.65   Mean   :13.18  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:77.25   3rd Qu.:15.20  
##  Max.   :1.4967   Max.   :1.0380   Max.   :83.50   Max.   :20.20  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50
str(human_)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ GNI      : num  64992 42261 56431 44025 45435 ...
##  $ Mat.Mor  : num  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
glimpse(human_)
## Rows: 155
## Columns: 8
## $ Edu2.FM   <dbl> 1.0072389, 0.9968288, 0.9834369, 0.9886128, 0.9690608, 0.992…
## $ Labo.FM   <dbl> 0.8908297, 0.8189415, 0.8251001, 0.8840361, 0.8286119, 0.807…
## $ Life.Exp  <dbl> 81.6, 82.4, 83.0, 80.2, 81.6, 80.9, 80.9, 79.1, 82.0, 81.8, …
## $ Edu.Exp   <dbl> 17.5, 20.2, 15.8, 18.7, 17.9, 16.5, 18.6, 16.5, 15.9, 19.2, …
## $ GNI       <dbl> 64992, 42261, 56431, 44025, 45435, 43919, 39568, 52947, 4215…
## $ Mat.Mor   <dbl> 4, 6, 6, 5, 6, 7, 9, 28, 11, 8, 6, 4, 8, 4, 27, 2, 11, 6, 6,…
## $ Ado.Birth <dbl> 7.8, 12.1, 1.9, 5.1, 6.2, 3.8, 8.2, 31.0, 14.5, 25.3, 6.0, 6…
## $ Parli.F   <dbl> 39.6, 30.5, 28.5, 38.0, 36.9, 36.9, 19.9, 19.4, 28.2, 31.4, …
# Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them. 

# Access corrplot
library(corrplot)

# compute the correlation matrix and visualize it with corrplot
cor(human_)
##                Edu2.FM      Labo.FM   Life.Exp     Edu.Exp         GNI
## Edu2.FM    1.000000000  0.009564039  0.5760299  0.59325156  0.43030485
## Labo.FM    0.009564039  1.000000000 -0.1400125  0.04732183 -0.02173971
## Life.Exp   0.576029853 -0.140012504  1.0000000  0.78943917  0.62666411
## Edu.Exp    0.593251562  0.047321827  0.7894392  1.00000000  0.62433940
## GNI        0.430304846 -0.021739705  0.6266641  0.62433940  1.00000000
## Mat.Mor   -0.660931770  0.240461075 -0.8571684 -0.73570257 -0.49516234
## Ado.Birth -0.529418415  0.120158862 -0.7291774 -0.70356489 -0.55656208
## Parli.F    0.078635285  0.250232608  0.1700863  0.20608156  0.08920818
##              Mat.Mor  Ado.Birth     Parli.F
## Edu2.FM   -0.6609318 -0.5294184  0.07863528
## Labo.FM    0.2404611  0.1201589  0.25023261
## Life.Exp  -0.8571684 -0.7291774  0.17008631
## Edu.Exp   -0.7357026 -0.7035649  0.20608156
## GNI       -0.4951623 -0.5565621  0.08920818
## Mat.Mor    1.0000000  0.7586615 -0.08944000
## Ado.Birth  0.7586615  1.0000000 -0.07087810
## Parli.F   -0.0894400 -0.0708781  1.00000000
cor(human_) |> corrplot()

Briefly on the outputs: GNI and maternal mortality are clearly right-skewed (their means are well above their medians), while life expectancy and expected education are more symmetric. The correlation matrix shows that life expectancy, expected years of education and GNI go together (e.g. Life.Exp vs Edu.Exp: 0.79), and all three are strongly negatively related to maternal mortality and adolescent birth rate (Life.Exp vs Mat.Mor: -0.86). The labour-force ratio (Labo.FM) and the share of women in parliament (Parli.F) correlate only weakly with the other variables.

2. PCA

Perform principal component analysis (PCA) on the raw (non-standardized) human data. Show the variability captured by the principal components. Draw a biplot displaying the observations by the first two principal components (PC1 coordinate in x-axis, PC2 coordinate in y-axis), along with arrows representing the original variables. (0-2 points)

library(GGally)

pca_human_non_standdd <- prcomp(human_)
pca_human_non_standdd
## Standard deviations (1, .., p=8):
## [1] 1.854416e+04 1.855219e+02 2.518701e+01 1.145441e+01 3.766241e+00
## [6] 1.565912e+00 1.912052e-01 1.591112e-01
## 
## Rotation (n x k) = (8 x 8):
##                     PC1           PC2           PC3           PC4           PC5
## Edu2.FM   -5.607472e-06  0.0006713951 -3.412027e-05 -2.736326e-04 -0.0022935252
## Labo.FM    2.331945e-07 -0.0002819357  5.302884e-04 -4.692578e-03  0.0022190154
## Life.Exp  -2.815823e-04  0.0283150248  1.294971e-02 -6.752684e-02  0.9865644425
## Edu.Exp   -9.562910e-05  0.0075529759  1.427664e-02 -3.313505e-02  0.1431180282
## GNI       -9.999832e-01 -0.0057723054 -5.156742e-04  4.932889e-05 -0.0001135863
## Mat.Mor    5.655734e-03 -0.9916320120  1.260302e-01 -6.100534e-03  0.0266373214
## Ado.Birth  1.233961e-03 -0.1255502723 -9.918113e-01  5.301595e-03  0.0188618600
## Parli.F   -5.526460e-05  0.0032317269 -7.398331e-03 -9.971232e-01 -0.0716401914
##                     PC6           PC7           PC8
## Edu2.FM    2.180183e-02  6.998623e-01  7.139410e-01
## Labo.FM    3.264423e-02  7.132267e-01 -7.001533e-01
## Life.Exp  -1.453515e-01  5.380452e-03  2.281723e-03
## Edu.Exp    9.882477e-01 -3.826887e-02  7.776451e-03
## GNI       -2.711698e-05 -8.075191e-07 -1.176762e-06
## Mat.Mor    1.695203e-03  1.355518e-04  8.371934e-04
## Ado.Birth  1.273198e-02 -8.641234e-05 -1.707885e-04
## Parli.F   -2.309896e-02 -2.642548e-03  2.680113e-03
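The print above shows the standard deviations and loadings, but the variability captured by each component is easier to read from summary(); a minimal sketch (output not shown):

# percentage of variance captured by each principal component
s <- summary(pca_human_non_standdd)
round(100 * s$importance[2, ], 1)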
# draw a biplot of the principal component representation and the original variables
biplot(pca_human_non_standdd, choices = 1:2)
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

biplot(pca_human_non_standdd, choices = 1:2, cex = c(0.8, 1))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

biplot(pca_human_non_standdd, choices = 1:2, cex = c(0.40, 0.60), col = c("grey40", "deeppink2")) # the latter colour applies to the variable arrows
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

biplot(pca_human_non_standdd, choices = 1:2, cex = c(0.20, 0.60), col = c("grey40", "deeppink2")) # the latter colour applies to the variable arrows
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

3. PCA with standardized variables

Standardize the variables in the human data and repeat the above analysis.

human_stddd <- scale(column_to_rownames(human, "Country"))
pca_human_stddd <- prcomp(human_stddd)

# draw a biplot of the principal component representation and the original variables
biplot(pca_human_stddd, choices = 1:2)

biplot(pca_human_stddd, choices = 1:2, cex = c(0.40, 0.60), col = c("grey40", "deeppink2")) # the latter colour applies to the variable arrows

### compare pca_human_non_standdd and pca_human_stddd
biplot(pca_human_non_standdd, choices = 1:2, cex = c(0.20, 0.60), col = c("grey40", "deeppink2")) # the latter colour applies to the variable arrows
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

biplot(pca_human_stddd, choices = 1:2, cex = c(0.20, 0.60), col = c("grey40", "deeppink2")) # the latter colour applies to the variable arrows
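Since the task asks for plot captions that describe the results, one option (a sketch using the standardized PCA object above) is to embed the variance percentages in the axis labels:

# biplot with the captured variance shown in the axis labels
s2 <- summary(pca_human_stddd)
pc_pct <- round(100 * s2$importance[2, 1:2], 1)
biplot(pca_human_stddd, choices = 1:2, cex = c(0.4, 0.6), col = c("grey40", "deeppink2"),
       xlab = paste0("PC1 (", pc_pct[1], "% of variance)"),
       ylab = paste0("PC2 (", pc_pct[2], "% of variance)"))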

Interpret the results of both analysis (with and without standardizing). Are the results different? Why or why not? Include captions (brief descriptions) in your plots where you describe the results by using not just your variable names, but the actual phenomena they relate to. (0-4 points)

Interpretation: The two analyses give clearly different results. Without standardization, PCA works on the raw covariances, so GNI, whose scale (from hundreds up to about 123 000) is vastly larger than that of the other variables, dominates the first principal component almost completely (its PC1 loading is close to -1 above) and the biplot collapses along the GNI direction. After standardization all variables are on a comparable scale, so each of them can contribute to the components and the result reflects the correlation structure of the data rather than the measurement units.

4. Personal interpretations of the first two principal component dimensions

Give your personal interpretations of the first two principal component dimensions based on the biplot drawn after PCA on the standardized human data. (0-2 points)

5. Tea & FactoMineR

The tea data comes from the FactoMineR package and was collected with a questionnaire on tea: 300 individuals were asked how they drink tea (18 questions) and what their perception of the product is (12 questions). In addition, some personal details were recorded (4 questions).
Load the tea dataset and convert its character variables to factors:

tea <- read.csv("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/tea.csv", stringsAsFactors = TRUE)


Explore the data briefly: look at the structure and the dimensions of the data. Use View(tea) to browse its contents, and visualize the data.

library(dplyr)
library(tidyr)
library(ggplot2)


# Explore the data briefly: look at the structure and the dimensions of the data. Use View(tea) to browse its contents
tea <- read.csv("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/tea.csv", stringsAsFactors = TRUE)
summary(tea)
##          breakfast           tea.time          evening          lunch    
##  breakfast    :144   Not.tea time:131   evening    :103   lunch    : 44  
##  Not.breakfast:156   tea time    :169   Not.evening:197   Not.lunch:256  
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##         dinner           always          home           work    
##  dinner    : 21   always    :103   home    :291   Not.work:213  
##  Not.dinner:279   Not.always:197   Not.home:  9   work    : 87  
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##         tearoom           friends          resto          pub     
##  Not.tearoom:242   friends    :196   Not.resto:221   Not.pub:237  
##  tearoom    : 58   Not.friends:104   resto    : 79   pub    : 63  
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##         Tea         How           sugar                     how     
##  black    : 74   alone:195   No.sugar:155   tea bag           :170  
##  Earl Grey:193   lemon: 33   sugar   :145   tea bag+unpackaged: 94  
##  green    : 33   milk : 63                  unpackaged        : 36  
##                  other:  9                                          
##                                                                     
##                                                                     
##                                                                     
##                   where                 price          age        sex    
##  chain store         :192   p_branded      : 95   Min.   :15.00   F:178  
##  chain store+tea shop: 78   p_cheap        :  7   1st Qu.:23.00   M:122  
##  tea shop            : 30   p_private label: 21   Median :32.00          
##                             p_unknown      : 12   Mean   :37.05          
##                             p_upscale      : 53   3rd Qu.:48.00          
##                             p_variable     :112   Max.   :90.00          
##                                                                          
##            SPC               Sport       age_Q          frequency  
##  employee    :59   Not.sportsman:121   +60  :38   +2/day     :127  
##  middle      :40   sportsman    :179   15-24:92   1 to 2/week: 44  
##  non-worker  :64                       25-34:69   1/day      : 95  
##  other worker:20                       35-44:40   3 to 6/week: 34  
##  senior      :35                       45-59:61                    
##  student     :70                                                   
##  workman     :12                                                   
##              escape.exoticism           spirituality        healthy   
##  escape-exoticism    :142     Not.spirituality:206   healthy    :210  
##  Not.escape-exoticism:158     spirituality    : 94   Not.healthy: 90  
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##          diuretic             friendliness            iron.absorption
##  diuretic    :174   friendliness    :242   iron absorption    : 31   
##  Not.diuretic:126   Not.friendliness: 58   Not.iron absorption:269   
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##          feminine             sophisticated        slimming          exciting  
##  feminine    :129   Not.sophisticated: 85   No.slimming:255   exciting   :116  
##  Not.feminine:171   sophisticated    :215   slimming   : 45   No.exciting:184  
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##                                                                                
##         relaxing              effect.on.health
##  No.relaxing:113   effect on health   : 66    
##  relaxing   :187   No.effect on health:234    
##                                               
##                                               
##                                               
##                                               
## 
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "+60","15-24",..: 4 5 5 2 5 2 4 4 4 4 ...
##  $ frequency       : Factor w/ 4 levels "+2/day","1 to 2/week",..: 3 3 1 3 1 3 4 2 1 1 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300  36
glimpse(tea)
## Rows: 300
## Columns: 36
## $ breakfast        <fct> breakfast, breakfast, Not.breakfast, Not.breakfast, b…
## $ tea.time         <fct> Not.tea time, Not.tea time, tea time, Not.tea time, N…
## $ evening          <fct> Not.evening, Not.evening, evening, Not.evening, eveni…
## $ lunch            <fct> Not.lunch, Not.lunch, Not.lunch, Not.lunch, Not.lunch…
## $ dinner           <fct> Not.dinner, Not.dinner, dinner, dinner, Not.dinner, d…
## $ always           <fct> Not.always, Not.always, Not.always, Not.always, alway…
## $ home             <fct> home, home, home, home, home, home, home, home, home,…
## $ work             <fct> Not.work, Not.work, work, Not.work, Not.work, Not.wor…
## $ tearoom          <fct> Not.tearoom, Not.tearoom, Not.tearoom, Not.tearoom, N…
## $ friends          <fct> Not.friends, Not.friends, friends, Not.friends, Not.f…
## $ resto            <fct> Not.resto, Not.resto, resto, Not.resto, Not.resto, No…
## $ pub              <fct> Not.pub, Not.pub, Not.pub, Not.pub, Not.pub, Not.pub,…
## $ Tea              <fct> black, black, Earl Grey, Earl Grey, Earl Grey, Earl G…
## $ How              <fct> alone, milk, alone, alone, alone, alone, alone, milk,…
## $ sugar            <fct> sugar, No.sugar, No.sugar, sugar, No.sugar, No.sugar,…
## $ how              <fct> tea bag, tea bag, tea bag, tea bag, tea bag, tea bag,…
## $ where            <fct> chain store, chain store, chain store, chain store, c…
## $ price            <fct> p_unknown, p_variable, p_variable, p_variable, p_vari…
## $ age              <int> 39, 45, 47, 23, 48, 21, 37, 36, 40, 37, 32, 31, 56, 6…
## $ sex              <fct> M, F, F, M, M, M, M, F, M, M, M, M, M, M, M, M, M, F,…
## $ SPC              <fct> middle, middle, other worker, student, employee, stud…
## $ Sport            <fct> sportsman, sportsman, sportsman, Not.sportsman, sport…
## $ age_Q            <fct> 35-44, 45-59, 45-59, 15-24, 45-59, 15-24, 35-44, 35-4…
## $ frequency        <fct> 1/day, 1/day, +2/day, 1/day, +2/day, 1/day, 3 to 6/we…
## $ escape.exoticism <fct> Not.escape-exoticism, escape-exoticism, Not.escape-ex…
## $ spirituality     <fct> Not.spirituality, Not.spirituality, Not.spirituality,…
## $ healthy          <fct> healthy, healthy, healthy, healthy, Not.healthy, heal…
## $ diuretic         <fct> Not.diuretic, diuretic, diuretic, Not.diuretic, diure…
## $ friendliness     <fct> Not.friendliness, Not.friendliness, friendliness, Not…
## $ iron.absorption  <fct> Not.iron absorption, Not.iron absorption, Not.iron ab…
## $ feminine         <fct> Not.feminine, Not.feminine, Not.feminine, Not.feminin…
## $ sophisticated    <fct> Not.sophisticated, Not.sophisticated, Not.sophisticat…
## $ slimming         <fct> No.slimming, No.slimming, No.slimming, No.slimming, N…
## $ exciting         <fct> No.exciting, exciting, No.exciting, No.exciting, No.e…
## $ relaxing         <fct> No.relaxing, No.relaxing, relaxing, relaxing, relaxin…
## $ effect.on.health <fct> No.effect on health, No.effect on health, No.effect o…
View(tea)

# visualize the data.

#pivot_longer(tea, cols = everything(-)) %>% ggplot(aes(value)) + facet_wrap("name", scales = "free") +  geom_bar()

# pivot_longer(tea, cols = everything()) %>%  ggplot(aes(value)) + facet_wrap("name", scales = "free") +   geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))

# the two commented-out calls above error out: the numeric variable `age` must be filtered out first

teatea <- dplyr::select(tea, -age)
str(teatea)
## 'data.frame':    300 obs. of  35 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "+60","15-24",..: 4 5 5 2 5 2 4 4 4 4 ...
##  $ frequency       : Factor w/ 4 levels "+2/day","1 to 2/week",..: 3 3 1 3 1 3 4 2 1 1 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
pivot_longer(teatea, cols = everything()) %>% 
  ggplot(aes(value)) + facet_wrap("name", scales = "free") +  geom_bar()

pivot_longer(teatea, cols = everything()) %>%  ggplot(aes(value)) + facet_wrap("name", scales = "free") +   geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))

I have to say that bar-plotting every variable (except age) gave a rather messy image; I had to pop the plot out and zoom out (Cmd + - on a Mac) quite a lot. It is probably easiest to do this assignment with the same variables as in the Exercise Set, because I could not find a description of the whole dataset and its variables (although many of the factors are quite self-explanatory, e.g. the variable sophisticated with levels “Not.sophisticated” and “sophisticated”).

In the knitted HTML version the last geom_bar plot looked terrible, so I'll try again with fewer variables.

library(ggplot2)

# column names to keep in the dataset & creation of a new dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, keep_columns)
## Warning: Using an external vector in selections was deprecated in tidyselect 1.1.0.
## ℹ Please use `all_of()` or `any_of()` instead.
##   # Was:
##   data %>% select(keep_columns)
## 
##   # Now:
##   data %>% select(all_of(keep_columns))
## 
## See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was
## generated.
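The warning already names the fix: wrapping the external character vector in all_of() tells tidyselect to treat it as a vector of column names:

# recommended form, per the warning above
tea_time <- dplyr::select(tea, all_of(keep_columns))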
pivot_longer(tea_time, cols = everything()) %>%  ggplot(aes(value)) + facet_wrap("name", scales = "free") +   geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))

MCA. Use Multiple Correspondence Analysis (MCA) on the tea data (or on just certain columns of the data, it is up to you!). Interpret the results of the MCA. You can also explore other plotting options for MCA. Comment on the output of the plots.

library(FactoMineR)
library(swirl)
## 
## | Hi! I see that you have some variables saved in your workspace. To keep
## | things running smoothly, I recommend you clean up before starting swirl.
## 
## | Type ls() to see a list of the variables in your workspace. Then, type
## | rm(list=ls()) to clear your workspace.
## 
## | Type swirl() when you are ready to begin.
library(dplyr)

# column names to keep in the dataset & creation of a new dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, all_of(keep_columns))

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), graph.type = "classic", habillage = "quali")

## out of curiosity, run MCA on the whole data (only the age variable excluded)

mca2 <- MCA(teatea, graph = FALSE)

# summary of the model (whole data)
summary(mca2)
## 
## Call:
## MCA(X = teatea, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.090   0.082   0.070   0.063   0.056   0.053   0.050
## % of var.              5.838   5.292   4.551   4.057   3.616   3.465   3.272
## Cumulative % of var.   5.838  11.130  15.681  19.738  23.354  26.819  30.091
##                        Dim.8   Dim.9  Dim.10  Dim.11  Dim.12  Dim.13  Dim.14
## Variance               0.048   0.047   0.044   0.041   0.040   0.039   0.037
## % of var.              3.090   3.053   2.834   2.643   2.623   2.531   2.388
## Cumulative % of var.  33.181  36.234  39.068  41.711  44.334  46.865  49.252
##                       Dim.15  Dim.16  Dim.17  Dim.18  Dim.19  Dim.20  Dim.21
## Variance               0.036   0.035   0.034   0.032   0.031   0.031   0.030
## % of var.              2.302   2.275   2.172   2.085   2.013   2.011   1.915
## Cumulative % of var.  51.554  53.829  56.000  58.086  60.099  62.110  64.025
##                       Dim.22  Dim.23  Dim.24  Dim.25  Dim.26  Dim.27  Dim.28
## Variance               0.028   0.027   0.026   0.025   0.025   0.024   0.024
## % of var.              1.847   1.740   1.686   1.638   1.609   1.571   1.524
## Cumulative % of var.  65.872  67.611  69.297  70.935  72.544  74.115  75.639
##                       Dim.29  Dim.30  Dim.31  Dim.32  Dim.33  Dim.34  Dim.35
## Variance               0.023   0.022   0.021   0.020   0.020   0.019   0.019
## % of var.              1.459   1.425   1.378   1.322   1.281   1.241   1.222
## Cumulative % of var.  77.099  78.523  79.901  81.223  82.504  83.745  84.967
##                       Dim.36  Dim.37  Dim.38  Dim.39  Dim.40  Dim.41  Dim.42
## Variance               0.018   0.017   0.017   0.016   0.015   0.015   0.014
## % of var.              1.152   1.092   1.072   1.019   0.993   0.950   0.924
## Cumulative % of var.  86.119  87.211  88.283  89.301  90.294  91.244  92.169
##                       Dim.43  Dim.44  Dim.45  Dim.46  Dim.47  Dim.48  Dim.49
## Variance               0.014   0.013   0.012   0.011   0.011   0.010   0.010
## % of var.              0.891   0.833   0.792   0.729   0.716   0.666   0.660
## Cumulative % of var.  93.060  93.893  94.684  95.414  96.130  96.796  97.456
##                       Dim.50  Dim.51  Dim.52  Dim.53  Dim.54
## Variance               0.009   0.009   0.008   0.007   0.006
## % of var.              0.605   0.584   0.519   0.447   0.390
## Cumulative % of var.  98.060  98.644  99.163  99.610 100.000
## 
## Individuals (the 10 first)
##                  Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3    ctr
## 1             | -0.580  1.246  0.174 |  0.155  0.098  0.012 |  0.052  0.013
## 2             | -0.376  0.522  0.108 |  0.293  0.350  0.066 | -0.164  0.127
## 3             |  0.083  0.026  0.004 | -0.155  0.099  0.015 |  0.122  0.071
## 4             | -0.569  1.196  0.236 | -0.273  0.304  0.054 | -0.019  0.002
## 5             | -0.145  0.078  0.020 | -0.142  0.083  0.019 |  0.002  0.000
## 6             | -0.676  1.693  0.272 | -0.284  0.330  0.048 | -0.021  0.002
## 7             | -0.191  0.135  0.027 |  0.020  0.002  0.000 |  0.141  0.095
## 8             | -0.043  0.007  0.001 |  0.108  0.047  0.009 | -0.089  0.038
## 9             | -0.027  0.003  0.000 |  0.267  0.291  0.049 |  0.341  0.553
## 10            |  0.205  0.155  0.028 |  0.366  0.546  0.089 |  0.281  0.374
##                 cos2  
## 1              0.001 |
## 2              0.021 |
## 3              0.009 |
## 4              0.000 |
## 5              0.000 |
## 6              0.000 |
## 7              0.015 |
## 8              0.006 |
## 9              0.080 |
## 10             0.052 |
## 
## Categories (the 10 first)
##                  Dim.1    ctr   cos2 v.test    Dim.2    ctr   cos2 v.test  
## breakfast     |  0.182  0.504  0.031  3.022 |  0.020  0.007  0.000  0.330 |
## Not.breakfast | -0.168  0.465  0.031 -3.022 | -0.018  0.006  0.000 -0.330 |
## Not.tea time  | -0.556  4.286  0.240 -8.468 |  0.004  0.000  0.000  0.065 |
## tea time      |  0.431  3.322  0.240  8.468 | -0.003  0.000  0.000 -0.065 |
## evening       |  0.276  0.830  0.040  3.452 | -0.409  2.006  0.087 -5.109 |
## Not.evening   | -0.144  0.434  0.040 -3.452 |  0.214  1.049  0.087  5.109 |
## lunch         |  0.601  1.678  0.062  4.306 | -0.408  0.854  0.029 -2.924 |
## Not.lunch     | -0.103  0.288  0.062 -4.306 |  0.070  0.147  0.029  2.924 |
## dinner        | -1.105  2.709  0.092 -5.240 | -0.081  0.016  0.000 -0.386 |
## Not.dinner    |  0.083  0.204  0.092  5.240 |  0.006  0.001  0.000  0.386 |
##                Dim.3    ctr   cos2 v.test  
## breakfast     -0.107  0.225  0.011 -1.784 |
## Not.breakfast  0.099  0.208  0.011  1.784 |
## Not.tea time   0.062  0.069  0.003  0.950 |
## tea time      -0.048  0.054  0.003 -0.950 |
## evening        0.344  1.653  0.062  4.301 |
## Not.evening   -0.180  0.864  0.062 -4.301 |
## lunch          0.240  0.343  0.010  1.719 |
## Not.lunch     -0.041  0.059  0.010 -1.719 |
## dinner         0.796  1.805  0.048  3.777 |
## Not.dinner    -0.060  0.136  0.048 -3.777 |
## 
## Categorical variables (eta2)
##                 Dim.1 Dim.2 Dim.3  
## breakfast     | 0.031 0.000 0.011 |
## tea.time      | 0.240 0.000 0.003 |
## evening       | 0.040 0.087 0.062 |
## lunch         | 0.062 0.029 0.010 |
## dinner        | 0.092 0.000 0.048 |
## always        | 0.056 0.035 0.007 |
## home          | 0.016 0.002 0.030 |
## work          | 0.075 0.020 0.022 |
## tearoom       | 0.321 0.019 0.031 |
## friends       | 0.186 0.061 0.030 |
# visualize MCA (whole data)
plot(mca2, invisible=c("ind"), graph.type = "classic", habillage = "quali")

# in the interpretation I focus on the first MCA factor map

Interpretation. Neither dimension explains the variance particularly well: the first dimension explains about 15% of the variance and the second about 14%.

Of the factor variables, tea shop (as the place of purchase) and unpackaged (as the tea format) contribute strongly to Dim 1.

Based on this analysis, a good follow-up question would be whether “unpackaged green tea from a tea shop” forms a clear dimension, that is, a distinct consumer choice, in this data.

The second dimension is driven by 1) other (what the tea is drunk with), 2) chain store+tea shop (where it is bought), and 3) tea bag+unpackaged (in what form it is consumed).

I wonder if these could represent specific consumer tastes: Dim 1 would characterize the hardcore unpackaged green tea consumer, and Dim 2 a consumer type who is more open to different ways of consuming tea.

I looked for support here:

http://factominer.free.fr/factomethods/multiple-correspondence-analysis.html

Youtube: https://www.youtube.com/watch?v=reG8Y9ZgcaQ

As for the instruction “Draw at least the variable biplot of the analysis.”, I was not sure what was expected here.
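One reading of the instruction is that the variable map already drawn with plot(mca, invisible = "ind") is essentially the variable biplot. Alternatively, assuming the factoextra package is installed (it is not used elsewhere in this report), a minimal sketch:

# variable map of the MCA with non-overlapping labels
library(factoextra)
fviz_mca_var(mca, repel = TRUE)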


Week 6: Analysis of longitudinal data

As always, I ran short of time. Unfortunately, this time I did not make it to the end of Part I and could not start Part II.

Meet and Repeat: PART I

  • Print out the (column) names of the data
  • Look at the structure of the data
  • Print out summaries of the variables in the data
  • Pay special attention to the structure of the data
RATS_ <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", sep ="\t", header = T)

names(RATS_)
##  [1] "ID"    "Group" "WD1"   "WD8"   "WD15"  "WD22"  "WD29"  "WD36"  "WD43" 
## [10] "WD44"  "WD50"  "WD57"  "WD64"
str(RATS_)
## 'data.frame':    16 obs. of  13 variables:
##  $ ID   : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Group: int  1 1 1 1 1 1 1 1 2 2 ...
##  $ WD1  : int  240 225 245 260 255 260 275 245 410 405 ...
##  $ WD8  : int  250 230 250 255 260 265 275 255 415 420 ...
##  $ WD15 : int  255 230 250 255 255 270 260 260 425 430 ...
##  $ WD22 : int  260 232 255 265 270 275 270 268 428 440 ...
##  $ WD29 : int  262 240 262 265 270 275 273 270 438 448 ...
##  $ WD36 : int  258 240 265 268 273 277 274 265 443 460 ...
##  $ WD43 : int  266 243 267 270 274 278 276 265 442 458 ...
##  $ WD44 : int  266 244 267 272 273 278 271 267 446 464 ...
##  $ WD50 : int  265 238 264 274 276 284 282 273 456 475 ...
##  $ WD57 : int  272 247 268 273 278 279 281 274 468 484 ...
##  $ WD64 : int  278 245 269 275 280 281 284 278 478 496 ...
summary(RATS_)
##        ID            Group           WD1             WD8             WD15      
##  Min.   : 1.00   Min.   :1.00   Min.   :225.0   Min.   :230.0   Min.   :230.0  
##  1st Qu.: 4.75   1st Qu.:1.00   1st Qu.:252.5   1st Qu.:255.0   1st Qu.:255.0  
##  Median : 8.50   Median :1.50   Median :340.0   Median :345.0   Median :347.5  
##  Mean   : 8.50   Mean   :1.75   Mean   :365.9   Mean   :369.1   Mean   :372.5  
##  3rd Qu.:12.25   3rd Qu.:2.25   3rd Qu.:480.0   3rd Qu.:476.2   3rd Qu.:486.2  
##  Max.   :16.00   Max.   :3.00   Max.   :555.0   Max.   :560.0   Max.   :565.0  
##       WD22            WD29            WD36            WD43      
##  Min.   :232.0   Min.   :240.0   Min.   :240.0   Min.   :243.0  
##  1st Qu.:267.2   1st Qu.:268.8   1st Qu.:267.2   1st Qu.:269.2  
##  Median :351.5   Median :356.5   Median :360.0   Median :360.0  
##  Mean   :379.2   Mean   :383.9   Mean   :387.0   Mean   :386.0  
##  3rd Qu.:492.5   3rd Qu.:497.8   3rd Qu.:504.2   3rd Qu.:501.0  
##  Max.   :580.0   Max.   :590.0   Max.   :597.0   Max.   :595.0  
##       WD44            WD50            WD57            WD64      
##  Min.   :244.0   Min.   :238.0   Min.   :247.0   Min.   :245.0  
##  1st Qu.:270.0   1st Qu.:273.8   1st Qu.:273.8   1st Qu.:278.0  
##  Median :362.0   Median :370.0   Median :373.5   Median :378.0  
##  Mean   :388.3   Mean   :394.6   Mean   :398.6   Mean   :404.1  
##  3rd Qu.:510.5   3rd Qu.:516.0   3rd Qu.:524.5   3rd Qu.:530.8  
##  Max.   :595.0   Max.   :612.0   Max.   :618.0   Max.   :628.0

Convert the data to long form with pivot_longer().

library(dplyr)
library(tidyr)

# Convert ID and Group to factors

RATS_$ID <- factor(RATS_$ID)
RATS_$Group <- factor(RATS_$Group)

# Convert to long form

RATSL_ <- pivot_longer(RATS_, cols = -c(ID, Group), 
                      names_to = "WD",
                      values_to = "Weight") %>% 
  mutate(Time = as.integer(substr(WD, 3, 4))) %>%
  arrange(Time)

# Take a glimpse at the RATSL data
glimpse(RATSL_)
## Rows: 176
## Columns: 5
## $ ID     <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2, 3,…
## $ Group  <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, 1, …
## $ WD     <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", …
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, 470…
## $ Time   <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, 8, …

Individuals on the plot

#Access the package ggplot2
library(ggplot2)

# Draw the plot
ggplot(RATSL_, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(RATSL_$Weight), max(RATSL_$Weight)))

Standardise

library(dplyr)
library(tidyr)

# Standardise the variable Weight within each time point
RATSL_ <- RATSL_ %>%
  group_by(Time) %>%
  mutate(stdWeight = as.numeric(scale(Weight))) %>% # scale() returns a one-column matrix, so coerce to numeric
  ungroup() # ungroup() without arguments drops the Time grouping

# Glimpse the data
glimpse(RATSL_)
## Rows: 176
## Columns: 6
## $ ID        <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,…
## $ Group     <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, …
## $ WD        <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1…
## $ Weight    <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, …
## $ Time      <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, …
## $ stdWeight <dbl> -1.0011429, -1.1203857, -0.9613953, -0.8421525, -0.8819001, …
# Plot again, first with the raw Weight and then with the standardised Weight
library(ggplot2)

ggplot(RATSL_, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(name = "normal bprs")

ggplot(RATSL_, aes(x = Time, y = stdWeight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(name = "standardized bprs")

Meet and Repeat: PART II

Ran out of time :(